[6.x] Performance Optimisation for Stache - Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead#13224
Open
jonathan-bird wants to merge 2 commits into statamic:6.x from
Conversation
Author
@jasonvarga I've gotten a little carried away with performance improvements and found this too. Keen to hear your thoughts. I'm not 100% sure the
@jonathan-bird Hi, any updates about this PR? It would be very useful if released ASAP 💪
Member
Sorry, we don't have an ETA for reviewing/merging this pull request. We'll get to it when we can. In the meantime, you can pull this PR into your project with a composer patch.
Author
Not yet, it's ready when the team are. Ideally for v6 |
Description
Optimises Stache store item retrieval by using batch cache operations for Redis, Memcached, and DynamoDB users.
- Adds a getItems() method to BasicStore that fetches multiple items efficiently
- Adds a matching getItems() to AggregateStore before delegating to its child stores
- Uses cache()->many() for batch fetching (a single MGET for Redis); see the sketch below
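To make the idea concrete, here is a minimal sketch of the batch path using Laravel's Cache::many(). The class name, cache-key format, and miss-handling fallback are placeholders for illustration, not the PR's actual code:

```php
<?php

use Illuminate\Support\Facades\Cache;

// Hypothetical sketch only — key naming and fallback behaviour are assumptions.
class BasicStoreSketch
{
    /**
     * Fetch multiple Stache items with one batch cache call instead of N single gets.
     *
     * @param  string[]  $keys  Stache item keys
     * @return array<string, mixed>
     */
    public function getItems(array $keys): array
    {
        // Map each item key to an illustrative cache key.
        $cacheKeys = collect($keys)->mapWithKeys(
            fn ($key) => [$key => "stache::items::{$key}"]
        );

        // One network round trip for all keys (MGET on Redis, getMulti on Memcached).
        $cached = Cache::many($cacheKeys->values()->all());

        return $cacheKeys->map(function ($cacheKey, $key) use ($cached) {
            // Fall back to the existing single-item path for cache misses.
            return $cached[$cacheKey] ?? $this->getItem($key);
        })->all();
    }

    public function getItem(string $key)
    {
        // Placeholder for the existing per-item retrieval.
        return Cache::get("stache::items::{$key}");
    }
}
```

The win comes entirely from collapsing N round trips into one; the per-item fallback keeps behaviour identical for cache misses.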
Performance Impact

For Redis/Memcached users fetching many items (e.g. search results), the gains are substantial. I wrote a little benchmark tool, and the summary of the speed improvements is below. I have only benchmarked Redis, but since all of these drivers pay a network round trip per key, the improvement should be roughly the same for DynamoDB/Memcached:
Approximately a 12x speedup for Redis users: the batch approach using MGET is much faster than N individual GET calls.
For File/Array cache users: no change (driver detection skips batch mode). With the in-memory array cache, the improvement is minimal because there is no network latency to optimise; the overhead of building the batch request can even make it slightly slower for small datasets, so batch mode is skipped for these drivers.
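A rough illustration of how that driver detection could work, assuming a check against Laravel's built-in cache store classes (the PR's actual check may differ):

```php
<?php

use Illuminate\Cache\DynamoDbStore;
use Illuminate\Cache\MemcachedStore;
use Illuminate\Cache\RedisStore;
use Illuminate\Support\Facades\Cache;

// Hypothetical sketch — batch fetching only pays off when each cache read
// costs a network round trip, so file/array stores keep the per-item path.
function cacheSupportsBatchReads(): bool
{
    $store = Cache::getStore();

    // Network-backed stores benefit from a single MGET / getMulti / BatchGetItem.
    return $store instanceof RedisStore
        || $store instanceof MemcachedStore
        || $store instanceof DynamoDbStore;
}

// Illustrative usage ($stacheStore and $keys are assumed from surrounding code):
// $items = cacheSupportsBatchReads()
//     ? $stacheStore->getItems($keys)                  // batch path (see sketch above)
//     : array_map([$stacheStore, 'getItem'], $keys);   // per-item path for file/array caches
```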
Use Cases
This improvement benefits any code path that retrieves multiple Stache items at once, such as: